Backport of client: use RPC address and not serf after initial Consul discovery into release/1.5.x #16296
Backport
This PR is auto-generated from #16217 to be assessed for backporting due to the inclusion of the label backport/1.5.x.
The below text is copied from the body of the original PR.
Fixes #16211
Nomad servers can advertise independent IP addresses for `serf` and `rpc`. Somewhat unexpectedly, the `serf` address is also used for both Serf and server-to-server RPC communication (including Raft RPC). The address advertised for `rpc` is only used for client-to-server RPC. This split was introduced intentionally in Nomad 0.8.
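For illustration, a server agent could advertise the two addresses separately along these lines (a minimal sketch; the addresses and ports below are placeholders, not values from the original report):

```hcl
# Server agent config (sketch): advertise a private address for Serf and
# server-to-server traffic, and a client-reachable address for client RPC.
advertise {
  http = "10.0.1.10:4646"     # HTTP API
  rpc  = "203.0.113.10:4647"  # client-to-server RPC, reachable from client subnets
  serf = "10.0.1.10:4648"     # Serf gossip and server-to-server RPC, including Raft
}
```

In such a topology, only the `rpc` address needs to be reachable from client subnets; `serf` traffic stays server-to-server.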
When clients use Consul discovery to connect to servers, they get an initial discovery set from Consul, using the `rpc` tag in Consul to build the list of server addresses. The client then makes a `Status.Peers` RPC to get the list of those servers that are Raft peers. But this endpoint is shared between servers and clients, and it provides the address used for Raft. Most of the time this is harmless because servers bind on 0.0.0.0 anyway. But in topologies where servers are on a private network and clients are on separate subnets (or even public subnets), clients will make initial contact with the server to get the list of peers, but then populate their local server set with unreachable addresses.
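As a rough sketch of the affected setup, a client relying on Consul discovery might be configured like this (the Consul address and service name are illustrative assumptions):

```hcl
# Client agent config (sketch): discover servers through Consul instead of
# listing them explicitly. The initial discovery set comes from the addresses
# the servers register in Consul with the rpc tag.
client {
  enabled = true
}

consul {
  address             = "127.0.0.1:8500"
  server_service_name = "nomad"  # service the servers register under
  client_auto_join    = true     # use Consul to find servers
}
```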
Cluster administrators can work around this problem by using `server_join` with specific IP addresses (or DNS names), because the `Node.UpdateStatus` endpoint returns the correct set of RPC addresses when updating the node. So once a client has registered, it will get the correct set of RPC addresses.
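The workaround, sketched below with placeholder addresses, pins clients to specific server RPC addresses via `server_join`:

```hcl
# Client agent config (sketch): join servers by explicit RPC address or DNS
# name rather than relying on the addresses returned by Status.Peers.
client {
  enabled = true

  server_join {
    retry_join     = ["203.0.113.10:4647", "203.0.113.11:4647", "203.0.113.12:4647"]
    retry_interval = "15s"
  }
}
```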
This changeset updates the client logic to query `Status.Members` instead of `Status.Peers`, and then extract the correctly advertised address and port from the response body.
Notes:
- …the `server_join` workaround described above.
- Also tested a configuration without Consul discovery (no `consul` config block) and used AWS EC2 auto-join via tags, and that works just fine as well.